Visualizing the Emergence of Intermediate Visual Patterns in DNNs: Supplementary Material

Neural Information Processing Systems

This work was done under the supervision of Dr. Quanshi Zhang. Please see Section G for details of the dataset and the selection of sample features and regional features. This section provides detailed derivations for the learning of the mixture model in Section 3.2 of the paper, under the assumption on all features in Eq. (3); the optimization can then be derived as follows. This section also provides further discussion of the quantification of knowledge points. According to Section 3.4 of the paper, a regional feature is a knowledge point if it is discriminative enough for classification.



Cross-Image Context for Single Image Inpainting - Supplementary Material

Feng, Tingliang, Feng, Wei, Li, Weiqi, Lin, Di (College of Intelligence and Computing, Tianjin University)

Neural Information Processing Systems

We use the PyTorch toolkit to implement our inpainting network with CICM. The network is optimized by the Adam solver for 400,000 iterations. The initial learning rate is 0.0001, which is linearly decayed during training. We randomly crop and flip the training images to augment the data. In our implementation, we use a warm-up strategy to pre-train the backbone network for 50,000 iterations.
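The linear learning-rate decay described above can be sketched as a simple schedule function. This is one plausible reading of "linearly decayed during training" (decay from the initial rate to zero over the full 400,000 iterations); the exact decay floor and start point are not specified in the text, and the function name is illustrative.

```python
def learning_rate(iteration, base_lr=1e-4, total_iters=400_000):
    """Linearly decay the learning rate from base_lr to 0 over total_iters.

    Sketch of the schedule implied by the supplementary material: the
    rate starts at 1e-4 and decreases linearly; the decay floor (zero
    here) is an assumption.
    """
    progress = min(iteration, total_iters) / total_iters
    return base_lr * (1.0 - progress)
```

In PyTorch such a schedule would typically be wired in via `torch.optim.lr_scheduler.LambdaLR` around the Adam optimizer.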



Causality Enhanced Origin-Destination Flow Prediction in Data-Scarce Cities

Feng, Tao, Zhang, Yunke, Wang, Huandong, Li, Yong

arXiv.org Artificial Intelligence

Accurate origin-destination (OD) flow prediction is of great importance to developing cities, as it can help optimize urban structures and layouts. However, given the common issues of missing regional features and scarce OD flow data, predicting OD flow in developing cities is quite daunting. To address this challenge, we propose a novel Causality-Enhanced OD Flow Prediction (CE-OFP) framework, a unified framework that transfers urban knowledge between cities and improves the accuracy of OD flow prediction in data-scarce cities. Specifically, we propose a novel reinforcement learning model to discover universal causalities among urban features in data-rich cities and build corresponding causal graphs. We then build a Causality-Enhanced Variational Auto-Encoder (CE-VAE) that incorporates the causal graphs for effective feature reconstruction in data-scarce cities. Finally, with the reconstructed features, we devise a knowledge distillation method with a graph attention network to migrate the OD prediction model from data-rich cities to data-scarce cities. Extensive experiments on two pairs of real-world datasets validate that the proposed CE-OFP remarkably outperforms state-of-the-art baselines, reducing the RMSE of OD flow prediction for data-scarce cities by up to 11%.
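The reported 11% gain is measured in RMSE over predicted OD flows. As a point of reference, RMSE for OD flow predictions can be computed as below; this is a generic sketch, not the authors' evaluation code, and the function name and data layout (dicts keyed by origin-destination region pairs) are illustrative assumptions.

```python
import math

def od_flow_rmse(predicted, actual):
    """Root-mean-square error between two OD flow matrices.

    `predicted` and `actual` map (origin, destination) region pairs to
    flow volumes; pairs absent from either dict are treated as zero flow.
    """
    pairs = set(predicted) | set(actual)
    if not pairs:
        return 0.0
    sq_err = sum((predicted.get(p, 0.0) - actual.get(p, 0.0)) ** 2 for p in pairs)
    return math.sqrt(sq_err / len(pairs))
```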


CORA: Adapting CLIP for Open-Vocabulary Detection with Region Prompting and Anchor Pre-Matching

Wu, Xiaoshi, Zhu, Feng, Zhao, Rui, Li, Hongsheng

arXiv.org Artificial Intelligence

Open-vocabulary detection (OVD) is an object detection task that aims to detect objects from novel categories beyond the base categories on which the detector is trained. Recent OVD methods rely on large-scale visual-language pre-trained models, such as CLIP, for recognizing novel objects. We identify two core obstacles that need to be tackled when incorporating these models into detector training: (1) the distribution mismatch that occurs when applying a VL model trained on whole images to region recognition tasks; (2) the difficulty of localizing objects of unseen classes. To overcome these obstacles, we propose CORA, a DETR-style framework that adapts CLIP for Open-vocabulary detection by Region prompting and Anchor pre-matching. Region prompting mitigates the whole-to-region distribution gap by prompting the region features of the CLIP-based region classifier. Anchor pre-matching helps learn generalizable object localization via a class-aware matching mechanism. We evaluate CORA on the COCO OVD benchmark, where we achieve 41.7 AP50 on novel classes, outperforming the previous SOTA by 2.4 AP50 even without resorting to extra training data. When extra training data is available, we train CORA$^+$ on both ground-truth base-category annotations and additional pseudo bounding box labels computed by CORA. CORA$^+$ achieves 43.1 AP50 on the COCO OVD benchmark and 28.1 box APr on the LVIS OVD benchmark.
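The CLIP-based region classifier underlying this style of OVD matches each region feature against text embeddings of class names. The following is a minimal sketch of that matching step only, assuming pre-computed embedding vectors; the function name and temperature value are illustrative, and the actual CORA classifier additionally applies learned region prompts to the region features before matching.

```python
import math

def classify_region(region_feature, text_embeddings, temperature=0.01):
    """CLIP-style open-vocabulary classification of one region feature.

    `text_embeddings` maps class names to embedding vectors. Cosine
    similarities are scaled by 1/temperature and softmax-normalized;
    returns the best class name and the full probability dict.
    """
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    logits = {name: cosine(region_feature, emb) / temperature
              for name, emb in text_embeddings.items()}
    peak = max(logits.values())  # subtract max for numerical stability
    exps = {name: math.exp(l - peak) for name, l in logits.items()}
    total = sum(exps.values())
    probs = {name: e / total for name, e in exps.items()}
    return max(probs, key=probs.get), probs
```

Because the class vocabulary enters only through `text_embeddings`, novel categories can be added at inference time by embedding their names, which is what makes the classifier open-vocabulary.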